Stratified filtered sampling in stochastic optimization

Abstract



We develop a methodology for evaluating a decision strategy generated by a stochastic optimization model. The methodology is based on a pilot study in which we estimate the distribution of performance associated with the strategy, and define an appropriate stratified sampling plan. An algorithm we call filtered search allows us to implement this plan efficiently. We demonstrate the approach’s a...
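The abstract sketches a three-step recipe (pilot study, stratified plan, filtered implementation) that is concrete enough to illustrate in code. The sketch below is a minimal illustration under stated assumptions: `simulate_performance` is a hypothetical stand-in for running the fixed strategy on random scenarios, the plan uses Neyman allocation, and a crude accept/reject loop stands in for the paper's filtered search, which the truncated abstract does not describe.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_performance(n):
    """Hypothetical stand-in: cost of the fixed strategy on n scenarios."""
    return rng.lognormal(sigma=1.0, size=n)

# Pilot study: estimate the distribution of performance.
pilot = simulate_performance(500)
edges = np.quantile(pilot, [0.0, 0.5, 0.9, 1.0])   # three strata by quantile
edges[0], edges[-1] = -np.inf, np.inf              # cover the full support
weights = np.array([0.5, 0.4, 0.1])                # stratum probabilities
stds = np.array([pilot[(pilot >= lo) & (pilot < hi)].std()
                 for lo, hi in zip(edges[:-1], edges[1:])])

# Stratified sampling plan: Neyman allocation of the main budget.
budget = 2000
alloc = np.maximum(1, np.round(
    budget * weights * stds / (weights * stds).sum())).astype(int)

# Main run: filter fresh scenarios into their strata (crude accept/reject
# in place of the paper's filtered search) and combine stratum means.
estimate = 0.0
for (lo, hi), wt, m in zip(zip(edges[:-1], edges[1:]), weights, alloc):
    kept = []
    while len(kept) < m:
        x = simulate_performance(1)[0]
        if lo <= x < hi:
            kept.append(x)
    estimate += wt * np.mean(kept)

print(f"stratified estimate of expected performance: {estimate:.3f}")
```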

Similar articles

Stochastic Optimization with Importance Sampling

Uniform sampling of training data has been commonly used in traditional stochastic optimization algorithms such as Proximal Stochastic Gradient Descent (prox-SGD) and Proximal Stochastic Dual Coordinate Ascent (prox-SDCA). Although uniform sampling can guarantee that the sampled stochastic quantity is an unbiased estimate of the corresponding true quantity, the resulting estimator may have a ra...
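To make the contrast with uniform sampling concrete, here is a minimal importance-sampled SGD on a least-squares problem, assuming the common choice of sampling example i with probability proportional to ||x_i||² and reweighting the gradient by 1/(n p_i) to keep the estimate unbiased. This is a generic sketch, not the prox-SGD or prox-SDCA variants the paper studies.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 10
X = rng.normal(size=(n, d)) * rng.uniform(0.1, 5.0, size=(n, 1))  # uneven rows
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

p = (X ** 2).sum(axis=1)
p /= p.sum()                        # sample example i with prob ∝ ||x_i||^2

w, lr = np.zeros(d), 0.01
for _ in range(10000):
    i = rng.choice(n, p=p)
    g = (X[i] @ w - y[i]) * X[i]    # gradient of 0.5 * (x_i·w - y_i)^2
    w -= lr * g / (n * p[i])        # 1/(n p_i) reweighting keeps it unbiased
print("final mean squared loss:", 0.5 * np.mean((X @ w - y) ** 2))
```

With this choice of p_i, the per-step curvature lr·||x_i||²/(n p_i) equals the same constant lr·Σⱼ||x_j||²/n for every example, which is one reason norm-proportional sampling tolerates a larger step size than uniform sampling on badly scaled data.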


Stochastic Optimization with Bandit Sampling

Many stochastic optimization algorithms work by estimating the gradient of the cost function on the fly by sampling datapoints uniformly at random from a training set. However, the estimator might have a large variance, which inadvertently slows down the convergence rate of the algorithms. One way to reduce this variance is to sample the datapoints from a carefully selected non-uniform distribu...
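One plausible instantiation of the adaptive idea, offered as an assumption rather than the paper's algorithm: treat each datapoint as an arm, keep a running estimate of its gradient norm as the reward signal, and mix the resulting distribution with a uniform component so every example keeps being explored.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 300, 5
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d)

score = np.ones(n)                 # running estimate of each ||g_i||
w, lr, mix = np.zeros(d), 0.01, 0.2
for _ in range(8000):
    p = (1 - mix) * score / score.sum() + mix / n   # explore/exploit mix
    i = rng.choice(n, p=p)
    g = (X[i] @ w - y[i]) * X[i]
    score[i] = 0.9 * score[i] + 0.1 * np.linalg.norm(g)  # bandit feedback
    w -= lr * g / (n * p[i])       # unbiased reweighted step
print("final mean squared loss:", 0.5 * np.mean((X @ w - y) ** 2))
```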


Sampling Bounds for Stochastic Optimization

A large class of stochastic optimization problems can be modeled as minimizing an objective function f that depends on a choice of a vector x ∈ X, as well as on a random external parameter ω ∈ Ω given by a probability distribution π. The value of the objective function is a random variable and often the goal is to find an x ∈ X to minimize the expected cost Eω[fω(x)]. Each ω is referred to as a...
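The computational counterpart of this setup is the sample average approximation: draw N scenarios from π and minimize the empirical average of f_ω(x), with sampling bounds of the kind the abstract refers to controlling how large N must be. Below is a minimal sketch in which the newsvendor-style cost, the exponential demand, and the grid search are all illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)
c, price = 1.0, 3.0                       # order cost, selling price

def f(x, omega):                          # cost of ordering x under demand ω
    return c * x - price * np.minimum(x, omega)

omegas = rng.exponential(scale=10.0, size=2000)   # N sampled scenarios
grid = np.linspace(0.0, 60.0, 601)                # candidate decisions x
emp = np.array([f(x, omegas).mean() for x in grid])
x_hat = grid[emp.argmin()]
print(f"SAA solution x ≈ {x_hat:.1f}; the true optimum is the "
      f"{1 - c/price:.2f}-quantile ≈ {10.0 * np.log(price / c):.1f}")
```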


Accelerating Minibatch Stochastic Gradient Descent using Stratified Sampling

Stochastic Gradient Descent (SGD) is a popular optimization method which has been applied to many important machine learning tasks such as Support Vector Machines and Deep Neural Networks. In order to parallelize SGD, minibatch training is often employed. The standard approach is to uniformly sample a minibatch at each step, which often leads to high variance. In this paper we propose a stratif...
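A minimal sketch of the stratified minibatch idea, under the assumption that strata are formed by a cheap sort on a feature statistic (the truncated abstract does not say how the paper builds its strata): each minibatch draws from every stratum and recombines per-stratum gradient means with the stratum weights, which stays unbiased while cutting variance when strata are internally homogeneous.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, K = 600, 8, 4
X = rng.normal(size=(n, d)) + rng.choice([-3, 0, 3], size=(n, 1))
y = X @ rng.normal(size=d)

# Cheap stratification: sort rows by mean feature value, split into K groups.
order = np.argsort(X.mean(axis=1))
strata = np.array_split(order, K)
weights = np.array([len(s) / n for s in strata])

w, lr, per_stratum = np.zeros(d), 0.01, 4       # minibatch = 4 draws/stratum
for _ in range(3000):
    g = np.zeros(d)
    for s, wt in zip(strata, weights):
        idx = rng.choice(s, size=per_stratum, replace=False)
        resid = X[idx] @ w - y[idx]
        g += wt * (X[idx] * resid[:, None]).mean(axis=0)  # stratum mean grad
    w -= lr * g                                  # unbiased full-gradient est.
print("final mean squared loss:", 0.5 * np.mean((X @ w - y) ** 2))
```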



Journal

Journal title: Journal of Applied Mathematics and Decision Sciences

Year: 2000

ISSN: 1173-9126

DOI: 10.1155/s117391260000002x